Alessandro Filazzola, Batbaatar Amgaa, Charlotte Brown, Issac Heida, Jessica Grenke, Margarete Dettlaff, Tan Bao, & JC Cahill
library(tidyverse)
library(PRISMAstatement)
Conduct a meta-analysis of the literature testing the indirect effects of grazing on animal taxa through its direct effects on the plant community.
| date | task |
|---|---|
| Nov 9 | Establish search terms to be used in the meta-analysis |
| Nov 12 | Compile list of journal articles and sub-divide for each researcher |
| Nov 14 | Begin reviewing papers and extracting data |
| Jan 28 | Complete data extraction from papers |
| Feb 11 | Complete preliminary analysis and set structure for MS |
| Feb 25 | Settle on analyses to be used and begin writing manuscript |
| March 11 | Complete first draft of MS and pass to co-authors |
| March 25 | Comments passed back on draft |
| April 2 | Complete revisions and submit to journal |
A systematic literature search was conducted using Web of Science for all empirical research articles. The review will include all studies globally. The intended purpose of this search is to capture all articles that have documented grazing either along a gradient (e.g. different frequencies or intensities) or as presence/absence (e.g. excluded, ungrazed, or retired ranch lands). We conducted two separate searches to capture studies that tested gradients and studies that compared grazed to ungrazed treatments. Duplicate articles between the searches were removed. We also intentionally excluded terms that returned articles not relevant to the purpose of this study, including: review, synthesis, policy, social, carbon, and fish. The search terms used were:
Search A: graz* OR livestock AND exclosure* OR exclusion OR exclude* OR ungrazed OR retire* OR fallow*
Search B: grazing intensity OR grazing gradient OR stocking rate OR rotation* grazing
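Removing duplicates between the two searches can be sketched with dplyr, matching on DOI. The records below are toy stand-ins for the real Web of Science exports (the file contents and column names are assumptions, not the project's actual data):

``` r
library(dplyr)

## toy records standing in for the exports of Search A and Search B
search.a <- data.frame(DOI = c("10.1/a1", "10.1/a2"), Title = c("x", "y"))
search.b <- data.frame(DOI = c("10.1/a2", "10.1/b1"), Title = c("y", "z"))

## combine the two searches and keep one record per DOI
combined <- bind_rows(search.a, search.b) %>%
  distinct(DOI, .keep_all = TRUE)
nrow(combined) ## 3 unique articles across both searches
```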
This step includes a. checking for duplicates, b. reviewing each article for relevance, and c. consistently identifying and documenting exclusion criteria. Outcomes include a list of publications to be used for synthesis, a library of PDFs, and a PRISMA report to ensure the workflow is transparent and reproducible. Papers with the following characteristics were excluded:
evidence <- read.csv("data//synthesisdata//evidence.csv")
### Identify studies that were excluded
excludes <- evidence %>% group_by(reason.simplified) %>% count(exclude) %>% filter(reason.simplified!="")
ggplot(excludes, aes(x=reason.simplified, y=n)) + geom_bar(stat="identity") + coord_flip()
## frequency of study
year.rate <- evidence %>% group_by(Publication.Year) %>% summarize(n=length(Publication.Year))
ggplot(tail(year.rate,30)) + geom_bar(aes(x=Publication.Year, y=n), stat="identity") + ylab("number of published studies") +xlab("year published") +theme(text = element_text(size=16))
## total number of papers found
nrow(evidence)
## [1] 2989
## number of papers found outside of WoS
other <- read.csv("data/synthesisdata//other.sources.csv")
nrow(other)
## [1] 0
## number of articles excluded
excludes <- evidence %>% filter(exclude=="yes")
nrow(excludes)
## [1] 2679
## relevant papers
review <- evidence %>% filter(exclude!="yes")
nrow(review)
## [1] 310
## papers for meta
datasets <- read.csv("data//binary.simplified.csv")
meta <- length(unique(datasets$uniqueID))
meta
## [1] 216
prisma(found = 2989,
found_other = 1,
no_dupes = 2989,
screened = 2989,
screen_exclusions = 2675,
full_text = 315,
full_text_exclusions = 0,
qualitative = 315,
quantitative = 216,
width = 800, height = 800)
## Warning in prisma(found = 2989, found_other = 1, no_dupes = 2989, screened
## = 2989, : After screening exclusions, a different number of remaining full-
## text articles is stated.
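The warning arises because the stated flow counts do not add up: 2989 screened records minus 2675 screening exclusions leaves 314, not the 315 given as full-text articles. A quick arithmetic check on the counts from the call above makes the mismatch explicit:

``` r
## counts as passed to prisma() above
screened <- 2989
screen_exclusions <- 2675
full_text <- 315

## records remaining after screening should equal the full-text count
screened - screen_exclusions                 ## 314
(screened - screen_exclusions) == full_text  ## FALSE, hence the warning
```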
require(ggmap)
### Start with base map of world
mp <- NULL
mapWorld <- borders("world", colour="gray50", fill="gray50") # create a layer of borders
mp <- ggplot() + mapWorld
## colorblind-friendly palette
cbPalette <- c("#999999", "#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7","#000000")
meta <- read.csv("data//synthesisdata//evidence.csv")
meta <- meta %>% filter(lat != "") ## remove blanks for lat and lon
meta <- meta[!grepl(";", meta$lat),] ## remove multiple entries
meta$lat <- as.numeric(levels(meta$lat))[meta$lat] ## convert from factor to number
meta$lon <- as.numeric(levels(meta$lon))[meta$lon] ## convert from factor to number
## plot points on top
mp <- mp+ geom_point(data=meta , aes(x=lon, y=lat), size=2)
mp
meta <- read.csv("data//binary.simplified.csv")
## Load packages and functions
library(reshape2)
## Warning: package 'reshape2' was built under R version 3.5.2
##
## Attaching package: 'reshape2'
## The following object is masked from 'package:tidyr':
##
## smiths
library(metafor)
## Warning: package 'metafor' was built under R version 3.5.2
## Loading required package: Matrix
##
## Attaching package: 'Matrix'
## The following object is masked from 'package:tidyr':
##
## expand
## Loading 'metafor' package (version 2.0-0). For an overview
## and introduction to the package please type: help(metafor).
source("meta.eval.r") ## Multiple aggregate
## Simplify the estimates column
meta2 <- meta
est <- read.csv("data//Unique.Estimates.Column.csv")
meta2 <- merge(meta2, est, by="Estimate")
## join manuscript data
ms.data <- read.csv("data//synthesisdata//manuscript.data.csv")
meta2 <- merge(meta2, ms.data, by="uniqueID")
## subset for abundance only
meta2 <- meta2 %>% filter(est.reduced == "abundance")
## Create Unique identifier column
meta2[,"UniqueSite"] <- paste(meta2$uniqueID, meta2$grazer.simplified, meta2$Taxa.1, meta2$Functional.Group, meta2$estimate.simplified, sep="-")
## convert se to sd
meta2[meta2$Stat=="se","Value"] <- meta2[meta2$Stat=="se","Value"] * 1.96
meta2[meta2$Stat=="se","Stat"] <- "sd"
## drop comparisons that are not pairwise
meta2 <- meta2 %>% filter(grazing.compare.simple == "ungrazed" | grazing.compare.simple == "grazed")
meta2 <- meta2 %>% filter(uniqueID != "30") ## drop study 30 because anomalous
## Drop plants
meta2 <- meta2 %>% filter(Taxa.1 != "Plants") ## examine animals only
## Use function to extract summary statistics for comparisons
## meta.eval arguments are (meta.data, compare, ids , stats)
grazed.compare <- meta.eval(meta2, grazing.compare.simple, UniqueSite, Stat)
## Combine the lists into same dataframe
## Rename Columns in second dataframe
grazed.stat <- grazed.compare [[2]] ## extracted statistics
names(grazed.stat) <- c("UniqueSite","grazed_mean","grazed_sd","ungrazed_mean","ungrazed_sd","grazed_n","ungrazed_n") ## rename columns to match
grazed.raw <- grazed.compare[[1]] ## calculated statistics from raw values
## Join two dataframes
meta.stat <- rbind(grazed.raw, grazed.stat[, names(grazed.raw)])
meta.ready <- escalc(n1i = ungrazed_n, n2i = grazed_n, m1i = ungrazed_mean, m2i = grazed_mean, sd1i = ungrazed_sd, sd2i = grazed_sd, data = meta.stat, measure = "SMD", append = TRUE)
## Warning in ifelse(mi <= 1, NA, exp(lgamma(mi/2) - log(sqrt(mi/2)) -
## lgamma((mi - : value out of range in 'lgamma'
## Warning in escalc.default(n1i = ungrazed_n, n2i = grazed_n, m1i =
## ungrazed_mean, : Some 'yi' and/or 'vi' values equal to +-Inf. Recoded to
## NAs.
## clean up meta.ready
meta.ready <- na.omit(meta.ready) ## drop NAs
meta.ready <- meta.ready[meta.ready$grazed_mean<80,] ## drop rows with extreme grazed means (>= 80)
## separate out the identifiers
siteID <- matrix(unlist(strsplit(meta.ready$UniqueSite,"-")),ncol=5, byrow=TRUE)
siteID <- data.frame(siteID) ## recreate as dataframe
colnames(siteID) <- c("Study","grazer","taxa","FG","measure") ## add column names
meta.ready <- cbind(data.frame(meta.ready), siteID)
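An equivalent, arguably more idiomatic way to split the identifier back into its parts is tidyr::separate(), which handles the five hyphen-delimited fields directly (the identifiers below are toy examples matching the UniqueSite format built above):

``` r
library(tidyr)

## toy identifiers in the Study-grazer-taxa-FG-measure format
ids <- data.frame(UniqueSite = c("12-domestic-Arthropods-Predator-count",
                                 "45-wild-Birds-Herbivorous-density"))
ids <- separate(ids, UniqueSite,
                into = c("Study", "grazer", "taxa", "FG", "measure"),
                sep = "-", remove = FALSE)
ids$taxa ## "Arthropods" "Birds"
```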
#random-effects meta-analysis for grazed vs ungrazed plots
m1 <- rma(yi=yi, vi=vi, data = meta.ready)
summary(m1)
##
## Random-Effects Model (k = 130; tau^2 estimator: REML)
##
## logLik deviance AIC BIC AICc
## -202.4514 404.9028 408.9028 414.6224 408.9980
##
## tau^2 (estimated amount of total heterogeneity): 1.0059 (SE = 0.1594)
## tau (square root of estimated tau^2 value): 1.0029
## I^2 (total heterogeneity / total variability): 87.83%
## H^2 (total variability / sampling variability): 8.22
##
## Test for Heterogeneity:
## Q(df = 129) = 990.4855, p-val < .0001
##
## Model Results:
##
## estimate se zval pval ci.lb ci.ub
## 0.1637 0.1000 1.6377 0.1015 -0.0322 0.3596
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
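Given the large heterogeneity in m1 (I² ≈ 88%), a 95% prediction interval is often more informative than the confidence interval alone. metafor's predict() reports both for any fitted rma object; the sketch below fits a toy model so it runs standalone, but predict(m1) works the same way:

``` r
library(metafor)

## toy SMD dataset standing in for meta.ready (yi/vi as produced by escalc)
d <- escalc(measure = "SMD", m1i = c(5, 6, 7), m2i = c(4, 5, 5),
            sd1i = c(1, 1.2, 1.1), sd2i = c(1, 1.1, 1.2),
            n1i = c(10, 12, 14), n2i = c(10, 12, 14))
m <- rma(yi = yi, vi = vi, data = d)

## pred is the summary SMD; cr.lb/cr.ub bound the 95% prediction interval
predict(m, digits = 3)
```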
## get last year grazed as covariate
yr.grazed <- meta2 %>% group_by(uniqueID) %>% summarize(yr=max(yr.grazed))
colnames(yr.grazed)[1] <- "Study"
site.yr <- merge(siteID, yr.grazed, by="Study")
meta.ready[,"yr.grazed"] <- site.yr$yr
#mixed-effects meta-analysis for grazed vs ungrazed plots
m2 <- rma(yi=yi, vi=vi, mods=~FG+grazer-1, data = meta.ready)
summary(m2)
##
## Mixed-Effects Model (k = 130; tau^2 estimator: REML)
##
## logLik deviance AIC BIC AICc
## -183.8052 367.6105 389.6105 420.2729 392.0549
##
## tau^2 (estimated amount of residual heterogeneity): 0.8741 (SE = 0.1480)
## tau (square root of estimated tau^2 value): 0.9349
## I^2 (residual heterogeneity / unaccounted variability): 85.23%
## H^2 (unaccounted variability / sampling variability): 6.77
##
## Test for Residual Heterogeneity:
## QE(df = 120) = 655.0038, p-val < .0001
##
## Test of Moderators (coefficient(s) 1:10):
## QM(df = 10) = 23.0248, p-val = 0.0107
##
## Model Results:
##
## estimate se zval pval ci.lb ci.ub
## FGAll -0.6882 0.3435 -2.0031 0.0452 -1.3615 -0.0148 *
## FGDetrivorous -0.3164 0.5387 -0.5874 0.5569 -1.3724 0.7395
## FGHerbivorous -0.8363 0.3162 -2.6450 0.0082 -1.4560 -0.2166 **
## FGMycorrhiza -1.1851 0.5082 -2.3320 0.0197 -2.1812 -0.1891 *
## FGOmnivorous -0.6005 0.4120 -1.4576 0.1450 -1.4081 0.2070
## FGParasitic -0.9752 0.4240 -2.2999 0.0215 -1.8063 -0.1441 *
## FGPollinator -0.3689 0.5131 -0.7189 0.4722 -1.3746 0.6368
## FGPredator -0.7248 0.3696 -1.9611 0.0499 -1.4493 -0.0004 *
## grazerdomestic 1.0437 0.3103 3.3635 0.0008 0.4355 1.6518 ***
## grazerindigenous 0.9387 0.3780 2.4831 0.0130 0.1978 1.6796 *
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## compare inclusion of moderators: proportional reduction in tau^2 between m1 and m2
(1.0059 - 0.8741)/1.0059 ## moderators account for ~13% of the heterogeneity
## [1] 0.1310269
## Produce a forest plot to determine the effect sizes for each study
forest(m2)
confint(m1)
##
## estimate ci.lb ci.ub
## tau^2 1.0059 0.7593 1.4444
## tau 1.0029 0.8714 1.2018
## I^2(%) 87.8345 84.4959 91.2030
## H^2 8.2199 6.4499 11.3675
## Check for publication bias
## A symmetrical distribution suggests there is no publication bias
funnel(m2)
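Beyond visual inspection of the funnel plot, metafor offers a formal regression test for funnel-plot asymmetry (an Egger-type test) via regtest(). The sketch below fits a toy model so it runs standalone; regtest(m1) applies the same test to the model fitted above:

``` r
library(metafor)

## toy SMD dataset standing in for meta.ready
d <- escalc(measure = "SMD", m1i = c(5, 6, 7, 8), m2i = c(4, 5, 5, 6),
            sd1i = rep(1, 4), sd2i = rep(1, 4),
            n1i = c(10, 12, 14, 16), n2i = c(10, 12, 14, 16))
m <- rma(yi = yi, vi = vi, data = d)

## Egger-style regression test: a significant p-value suggests asymmetry
regtest(m, model = "rma", predictor = "sei")
```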
## Calculate Rosenthal's fail-safe number
fsn(yi, vi, data=meta.ready)
##
## Fail-safe N Calculation Using the Rosenthal Approach
##
## Observed Significance Level: 0.0003
## Target Significance Level: 0.05
##
## Fail-safe N: 441
# ## generate plot with spaces in between
# forest(m1, atransf=exp, cex=0.75, ylim=c(-1, 24),
# order=order(meta.ready$GI.compare,meta.ready$taxa), rows=c(3:4,7,10:16,19:21),
# # mlab="RE model for all studies", psize=1, slab= paste(meta.ready$Study, meta.ready$taxa, meta.ready$measure))
#
# addpoly(res.w, row=18, cex=0.75, atransf=exp, mlab="RE model for green wall")
# addpoly(res.r, row= 9, cex=0.75, atransf=exp, mlab="RE model for green roof")
# addpoly(res.rd, row= 6, cex=0.75, atransf=exp, mlab="RE model for roadsides")
# addpoly(res.p, row= 2, cex=0.75, atransf=exp, mlab="RE model for retention ponds")